
    The Atapuerca sites and the Ibeas hominids

    The Atapuerca Railway Trench and Ibeas sites near Burgos, Spain, are cave fillings that include a series of deposits ranging from below the Matuyama/Brunhes reversal up to the end of the Middle Pleistocene. The lowest fossil-bearing bed in the Trench contains an assemblage of large and small mammals including Mimomys savini, Pitymys gregaloides, Pliomys episcopalis, Crocuta crocuta, Dama sp. and Megacerini; the uppermost assemblage includes Canis lupus, Lynx spelaea, Panthera (Leo) fossilis, Felis silvestris, Equus caballus steinheimensis, E. c. germanicus, Pitymys subterraneus, Microtus arvalis agrestis, Pliomys lenki, and also Panthera toscana, Dicerorhinus hemitoechus and Bison schoetensacki, which are equally present in the lowest level. The biostratigraphic correlation and dating of the sites are briefly discussed, as is the paleoclimatic interpretation of the Trench sequences. Stone artifacts are found in several layers; the earliest occurrences correspond to the upper beds containing Mimomys savini. A set of preserved human occupation floors has been excavated in the top fossil-bearing beds. The stone-tool assemblages of the upper levels are of upper-medial Acheulean to Charentian tradition. The rich bone breccia SH, in the Cueva Mayor-Cueva del Silo system at Ibeas de Juarros, is a derived deposit produced by a mud flow that dispersed and carried the skeletons of many carnivores and humans. The taxa represented are Ursus deningeri (largely dominant), Panthera (Leo) fossilis, Vulpes vulpes and Homo sapiens var. Several traits of both the mandibular and cranial remains are summarized. Preliminary attempts at dating suggest that the Ibeas fossil man is older than the Last Interglacial, or oxygen-isotope stage 5.

    Intrinsic Textures for Relightable Free-Viewpoint Video

    This paper presents an approach to estimate the intrinsic texture properties (albedo, shading, normal) of scenes from multiple view acquisition under unknown illumination conditions. We introduce the concept of intrinsic textures, which are pixel-resolution surface textures representing the intrinsic appearance parameters of a scene. Unlike previous video relighting methods, the approach does not assume regions of uniform albedo, which makes it applicable to richly textured scenes. We show that intrinsic image methods can be used to refine an initial, low-frequency shading estimate based on a global lighting reconstruction from an original texture and coarse scene geometry in order to resolve the inherent global ambiguity in shading. The method is applied to relighting of free-viewpoint rendering from multiple view video capture. This demonstrates relighting with reproduction of fine surface detail. Quantitative evaluation on synthetic models with textured appearance shows accurate estimation of intrinsic surface reflectance properties. © 2014 Springer International Publishing
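    The estimation above rests on the standard multiplicative intrinsic-image model, image = albedo × shading. A minimal sketch of that model (the synthetic setup, the function name, and the assumption that a shading estimate is already available are all illustrative; the paper's refinement pipeline is not reproduced):

```python
import numpy as np

# Toy sketch of the multiplicative intrinsic model: image = albedo * shading.
# Given a (coarse) shading estimate, albedo follows by per-pixel division;
# the paper refines such a low-frequency estimate, which this sketch skips.

def estimate_albedo(image, shading, eps=1e-6):
    """Recover albedo from an image and a shading estimate."""
    return image / np.maximum(shading, eps)

rng = np.random.default_rng(0)
true_albedo = rng.uniform(0.2, 1.0, size=(8, 8))   # textured reflectance
shading = np.linspace(0.5, 1.0, 64).reshape(8, 8)  # smooth illumination
image = true_albedo * shading

recovered = estimate_albedo(image, shading)
print(np.allclose(recovered, true_albedo))  # True: exact when shading is known
```

    The interesting part of the problem, resolving the global ambiguity when shading is unknown, is what the paper's lighting reconstruction addresses; the division step itself is the easy final stage.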

    Estimation of intrinsic image sequences from image+depth video

    DOI: 10.1007/978-3-642-33783-3_24. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7577 LNCS, Part 6, pp. 327-34

    Correlation-based intrinsic image extraction from a single image

    Abstract. Intrinsic images represent the underlying properties of a scene such as illumination (shading) and surface reflectance. Extracting intrinsic images is a challenging, ill-posed problem. Human performance on tasks such as shadow detection and shape-from-shading is improved by adding colour and texture to surfaces. In particular, when a surface is painted with a textured pattern, correlations between local mean luminance and local luminance amplitude promote the interpretation of luminance variations as illumination changes. Based on this finding, we propose a novel feature, local luminance amplitude, to separate illumination and reflectance, and a framework to integrate this cue with hue and texture to extract intrinsic images. The algorithm uses steerable filters to separate images into frequency and orientation components and constructs shading and reflectance images from weighted combinations of these components. Weights are determined by correlations between corresponding variations in local luminance, local amplitude, colour and texture. The intrinsic images are further refined by ensuring the consistency of local texture elements. We test this method on surfaces photographed under different lighting conditions. The effectiveness of the algorithm is demonstrated by the correlation between our intrinsic images and ground-truth shading and reflectance data. Luminance amplitude was found to be a useful cue. Results are also presented for natural images.
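    The amplitude cue described in this abstract can be illustrated in a toy setting: under a multiplicative illumination gradient, local mean luminance and local luminance amplitude rise and fall together, whereas for texture alone they are roughly uncorrelated. This sketch uses a plain box filter in place of the paper's steerable-filter decomposition, and the synthetic texture and function names are assumptions of the sketch:

```python
import numpy as np

def box(x, k=5):
    # simple box filter: local mean over a k-by-k neighbourhood
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def amplitude_cue(lum, k=5):
    """Correlation between local mean luminance and local amplitude."""
    m = box(lum, k)              # local mean luminance
    a = box(np.abs(lum - m), k)  # local luminance amplitude
    return np.corrcoef(m.ravel(), a.ravel())[0, 1]

# A textured surface, with and without a multiplicative illumination gradient:
rng = np.random.default_rng(1)
texture = 1.0 + 0.3 * rng.standard_normal((32, 32))
illum = np.linspace(0.3, 1.0, 32)[None, :] * np.ones((32, 32))

# Illumination scales mean and amplitude together, raising the correlation.
print(amplitude_cue(texture * illum) > amplitude_cue(texture))  # True
```

    A high correlation therefore votes for interpreting a luminance variation as an illumination change rather than a reflectance change, which is the heart of the cue.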

    Estimating intrinsic images from image sequences with biased illumination

    Abstract. We present a method for estimating intrinsic images from a fixed-viewpoint image sequence captured under changing illumination directions. Previous work on this problem reduces the influence of shadows on reflectance images, but does not address shading effects, which can significantly degrade reflectance image estimation under the typically biased sampling of illumination directions. In this paper, we describe how biased illumination sampling leads to biased estimates of reflectance image derivatives. To avoid the effects of illumination bias, we propose a solution that explicitly models spatial and temporal constraints over the image sequence. With this constraint network, our technique minimizes a regularization function that takes advantage of the biased image derivatives to yield reflectance images less influenced by shading.
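    As context for the bias this abstract discusses, here is a sketch of the baseline temporal-median estimator of reflectance derivatives that this line of work starts from (the scene setup and function names are illustrative; the paper's constraint network and regularization are not reproduced):

```python
import numpy as np

# Baseline idea: with log I_t = log R + log S_t, take the reflectance
# log-derivative as the temporal median of the image log-derivatives,
# assuming the shading derivative is zero in most frames. Biased
# illumination sampling breaks that assumption, which is the failure
# mode the paper addresses.

def reflectance_log_derivative(frames):
    logs = np.log(frames)          # (T, W) rows of log-intensity
    dx = np.diff(logs, axis=1)     # per-frame horizontal log-derivative
    return np.median(dx, axis=0)   # temporal median across frames

T, W = 7, 16
rng = np.random.default_rng(2)
reflectance = rng.uniform(0.5, 1.5, W)
levels = np.linspace(0.3, 1.0, T)                 # per-frame brightness
shadings = np.repeat(levels[:, None], W, axis=1)  # spatially uniform shading
shadings[0, W // 2:] *= 0.4                       # one frame with a shadow edge
frames = shadings * reflectance[None, :]

est = reflectance_log_derivative(frames)
print(np.allclose(est, np.diff(np.log(reflectance))))  # True
```

    The median succeeds here because only one frame carries a shading edge; when illumination directions are sampled from a narrow range, many frames share correlated shading edges and the median becomes biased, motivating the paper's explicit spatial and temporal constraints.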

    Intrinsic Video

    Intrinsic images such as albedo and shading are valuable for later stages of visual processing. Previous methods for extracting albedo and shading use either single images or images together with depth data. Instead, we define intrinsic video estimation as the problem of extracting temporally coherent albedo and shading from video alone. Our approach exploits the assumption that albedo is constant over time while shading changes slowly. Optical flow aids in the accurate estimation of intrinsic video by providing temporal continuity as well as putative surface boundaries. Additionally, we find that the estimated albedo sequence can be used to improve optical flow accuracy in sequences with changing illumination. The approach makes only weak assumptions about the scene and we show that it substantially outperforms existing single-frame intrinsic image methods. We evaluate this quantitatively on synthetic sequences as well as on challenging natural sequences with complex geometry, motion, and illumination.
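    The constant-albedo assumption can be sketched in isolation for a static camera (a toy without the optical-flow machinery; the names and synthetic setup are assumptions of this sketch, not the paper's method):

```python
import numpy as np

# With albedo fixed over time, the per-pixel temporal median of
# log-intensity isolates the time-constant (albedo) component, up to a
# global scale; the residual is the shading sequence. Real video needs
# optical flow to align pixels over time, which this sketch omits.

def split_intrinsic_video(frames):
    logs = np.log(frames)                  # (T, H, W) log-intensity
    log_albedo = np.median(logs, axis=0)   # time-constant component
    log_shading = logs - log_albedo[None]  # time-varying residual
    return np.exp(log_albedo), np.exp(log_shading)

rng = np.random.default_rng(3)
albedo = rng.uniform(0.2, 1.0, (4, 4))
brightness = np.array([0.4, 0.7, 1.0, 0.6, 0.9])  # shading level per frame
frames = brightness[:, None, None] * albedo[None]

a_est, s_est = split_intrinsic_video(frames)
print(np.allclose(a_est[None] * s_est, frames))  # True: exact reconstruction
```

    In this toy case the recovered albedo equals the true albedo scaled by the median frame brightness (0.7), reflecting the usual global scale ambiguity of intrinsic decompositions.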